Advanced Approximation Algorithms (CMU 15-854B, Spring 2008), Lecture 20: Embeddings into Trees and L1 Embeddings, March 27, 2008
Abstract
Recall the sparsest-cut objective min_{S ⊆ V} c(E(S, S̄)) / D(S, S̄), where c(E(S, S̄)) is the sum of the weights of the edges that cross the cut, and D(S, S̄) is the sum of the demands of the pairs (s_i, t_i) that are separated by the cut. We recall that optimizing over the set of cuts is equivalent to optimizing over ℓ1 metrics, and is NP-hard. Instead, we may optimize over the set of all metrics. In this lecture, we bound the gap introduced by this relaxation by showing how these metrics embed into ℓ1 with low distortion. From last time, we have a theorem of Bourgain:

Theorem 1.1 (Bourgain '85 [1]). Any n-point metric d admits an α-distortion embedding into ℓp for any 1 ≤ p ≤ ∞ with α = O(log n).

This embedding is into 2^n dimensions, however! Fortunately, Linial, London, and Rabinovich [3] proved that it is possible to get such an embedding into only O(log² n) dimensions. In this lecture, we will show a somewhat weaker result: we will achieve an α = O(log n) embedding into ℓ1 using poly(n, d_max/d_min) dimensions, where d_max is the maximum distance between any two points according to the metric d, and d_min is the minimum distance. We note that d_max/d_min could potentially be exponentially large, but we won't worry about this detail.
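To make the construction concrete, here is a minimal sketch of the Fréchet-style embedding underlying Bourgain's theorem: each coordinate records a point's distance to a randomly sampled subset, with subsets drawn at roughly log n density scales. The function name, the sampling schedule, and the exact number of repetitions are illustrative assumptions rather than the lecture's exact construction.

```python
import math
import random

def frechet_embedding(points, d, seed=0):
    """Bourgain-type embedding sketch (illustrative assumptions).

    Each coordinate is x -> d(x, S) = min over s in S of d(x, s) for a
    random subset S.  Since |d(x, S) - d(y, S)| <= d(x, y), every
    coordinate is 1-Lipschitz, so the map never expands distances (up
    to normalization); the work in Bourgain's analysis is the matching
    lower bound on contraction, which holds with high probability with
    O(log n) random sets at each of the log n density scales.
    """
    rng = random.Random(seed)
    n = len(points)
    scales = max(1, math.ceil(math.log2(n)))   # densities 2^-1, ..., 2^-scales
    reps = max(1, math.ceil(math.log2(n)))     # O(log n) samples per scale
    subsets = []
    for t in range(1, scales + 1):
        for _ in range(reps):
            S = [x for x in points if rng.random() < 2.0 ** (-t)]
            if S:                              # skip empty samples
                subsets.append(S)
    # Image of each point: the vector of distances to the sampled sets.
    return {x: [min(d(x, s) for s in S) for S in subsets] for x in points}
```

Comparing two embedded points coordinate-by-coordinate in the ℓ1 norm is then what yields the O(log n)-distortion guarantee claimed above.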
Similar Resources
Advanced Approximation Algorithms (CMU 15-854B, Spring 2008), Lecture 27: Algorithms for Sparsest Cut, April 24, 2008
In lecture 19, we saw an LP relaxation based algorithm to solve the sparsest cut problem with an approximation guarantee of O(log n). In this lecture, we will show that the integrality gap of the LP relaxation is Ω(log n), and hence this is the best approximation factor one can get via the LP relaxation. We will also start developing an SDP relaxation based algorithm which provides an O(√log n) ...
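For reference, the relaxation in question is the standard metric LP for sparsest cut; this is the common formulation, with the demand-side normalization by ≥ 1 being one usual convention (equivalent, by scaling, to minimizing the ratio):

```latex
\min_{d\ \text{a metric on}\ V}\ \sum_{(u,v) \in E} c_{uv}\, d(u,v)
\qquad \text{subject to} \qquad \sum_{i} D_i\, d(s_i, t_i) \ge 1 .
```

Rounding an optimal metric d via a low-distortion ℓ1 embedding is exactly what drives the O(log n) guarantee.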
Advanced Approximation Algorithms (CMU 15-854B, Spring 2008), Lecture 25: Hardness of Max-kCSPs and Max-Independent-Set
Recall that an instance of the Max-kCSP problem is a collection of constraints, each of which is defined over k variables. A random assignment to the variables in a constraint satisfies it with probability 1/2^k, so a random assignment satisfies a 1/2^k fraction of the constraints in a Max-kCSP instance in expectation. This shows that the hardness result is close to optimal, since there is a trivial ...
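As a quick sanity check of the expectation claim, here is a small Monte Carlo sketch; modeling each constraint as having a single accepting assignment is an assumption made for illustration (a general satisfiable k-ary predicate is satisfied with probability at least 2^{-k}).

```python
import random

def random_assignment_fraction(k=3, n_vars=50, num_constraints=1000,
                               trials=2000, seed=0):
    """Estimate the fraction of constraints a uniformly random
    assignment satisfies when each constraint fixes one accepting
    pattern on k variables; the answer concentrates around 2^{-k}."""
    rng = random.Random(seed)
    constraints = [(rng.sample(range(n_vars), k),           # variable indices
                    [rng.randrange(2) for _ in range(k)])   # accepting pattern
                   for _ in range(num_constraints)]
    total = 0.0
    for _ in range(trials):
        a = [rng.randrange(2) for _ in range(n_vars)]
        sat = sum([a[v] for v in idx] == pat for idx, pat in constraints)
        total += sat / num_constraints
    return total / trials  # approx 2^{-k} (= 1/8 for k = 3)
```

Running random_assignment_fraction() returns a value close to 0.125, matching the 1/2^k expectation above.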
Advanced Approximation Algorithms (CMU 15-854B, Spring 2008), Lecture 10: Group Steiner Tree problem
We will be studying the Group Steiner tree problem in this lecture. Recall that the classical Steiner tree problem is the following: given a weighted graph G = (V, E), a subset S ⊆ V of the vertices, and a root r ∈ V, we want to find a minimum weight tree which connects all the vertices in S to r. The weights on the edges are assumed to be positive. We will now define the Group Steiner tree problem ...
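To ground the definition just recalled, the sketch below implements the classic 2-approximation for the classical Steiner tree problem: build the metric closure on the terminals with Dijkstra, run Prim's algorithm on those shortest-path distances, and expand each closure edge back into an actual path. This is a textbook baseline, not the algorithm of this lecture, and the adjacency-list representation is an assumption.

```python
import heapq

def steiner_2approx(graph, terminals):
    """2-approximation via MST of the metric closure.

    graph: dict u -> list of (v, weight) for a connected undirected
    graph; terminals: the vertices to connect (for the rooted version,
    pass S together with the root r).  Returns a set of edges whose
    union connects all terminals; taking any spanning tree of it gives
    a Steiner tree of weight at most twice the optimum.
    """
    def dijkstra(src):
        dist, parent = {src: 0.0}, {src: None}
        pq = [(0.0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist.get(u, float("inf")):
                continue
            for v, w in graph[u]:
                if du + w < dist.get(v, float("inf")):
                    dist[v], parent[v] = du + w, u
                    heapq.heappush(pq, (du + w, v))
        return dist, parent

    info = {t: dijkstra(t) for t in terminals}         # metric closure
    in_tree, edges = {terminals[0]}, set()
    while len(in_tree) < len(terminals):               # Prim on the closure
        u, v = min(((a, b) for a in in_tree
                    for b in terminals if b not in in_tree),
                   key=lambda e: info[e[0]][0][e[1]])
        in_tree.add(v)
        _, parent = info[u]                            # expand (u, v) into
        x = v                                          # its shortest path
        while parent[x] is not None:
            edges.add((parent[x], x))
            x = parent[x]
    return edges
```

The factor of 2 comes from doubling an optimal Steiner tree into an Euler tour and shortcutting it over the terminals, whose weight dominates that of the closure MST.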
Advanced Approximation Algorithms (CMU 15-854B, Spring 2008), Lecture 22: Hardness of Max-E3Lin, April 3, 2008
This lecture begins the proof of 1 − ε vs. 1/2 + ε hardness of Max-E3Lin. The basic idea is similar to previous reductions; we reduce from Label-Cover using a gadget that creates 2^K variables corresponding to the key vertices and 2^L variables corresponding to the label vertices, where they correspond in the usual way to {0, 1}^K and {0, 1}^L. We then want to select some subsets x, y, z of these strings ...
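The 1/2 + ε soundness is tight against the trivial algorithm: for a uniformly random assignment, each E3Lin equation is satisfied with probability exactly 1/2, since for any fixing of two of its variables exactly one value of the third satisfies the equation,

```latex
\Pr_{x \sim \{0,1\}^n}\!\left[\, x_i + x_j + x_k \equiv b \pmod{2} \,\right] \;=\; \frac{1}{2},
```

so a random assignment satisfies half the equations in expectation, and the hardness result says no efficient algorithm can guarantee noticeably more.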
Lecture 19: Sparsest Cut and L1 Embeddings, March 25, 2008
We will be studying the Sparsest Cut problem in this lecture. In this context, we will see how metric methods help in the design of approximation algorithms. We proceed to define the problem and briefly motivate its study.